M2.5 Contracts Upgrade Plan

Motivation

With M2.5 (https://github.com/FilOzone/filecoin-services/issues/147) landing soon, we need to prepare FilCDN for the upgrade as quickly as possible, so that the CDN integration keeps working with the upcoming FOC contracts.

Changed components

⚠️ All contracts will be upgraded without migrating their previous state, so all old FilCDN deals need to be expired.

FilecoinWarmStorageService

PDPVerifier

Open PRs

Pull requests that are expected to land for M2.5.

⚠️ These might still change before landing.

FilecoinWarmStorageService

ServiceRegistry

  • A new Service Registry Contract for Filecoin Synapse https://github.com/FilOzone/filecoin-services/pull/142
    • Contract ServiceProviderRegistry
    • Event ProviderRegistered
      • Argument providerId (new provider ID)
      • Argument owner (replaces previous provider argument)
    • Event ProviderRemoved
      • Argument providerId
    • Event ProductRemoved
      • Argument providerId
      • Argument productType
    • Method getPDPProduct(providerId)
    • Return value includes the serviceURL (previously an event argument)
      • Should be called after event ProductUpdated or ProductAdded
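The provider lifecycle above can be sketched as a small event reducer: `ProviderRegistered` now yields only `providerId` and `owner`, and the serviceURL stays unknown until a `ProductAdded`/`ProductUpdated` triggers a `getPDPProduct()` lookup. This is an illustrative sketch; the payload shapes are assumptions, not the actual event ABIs.

```typescript
// Sketch of tracking provider state from the new ServiceProviderRegistry
// events. Field names follow PR #142 as described above; the exact payload
// shapes are assumptions for illustration.

interface ProviderRecord {
  providerId: number;
  owner: string;             // replaces the previous `provider` argument
  serviceURL: string | null; // unknown until getPDPProduct() is queried
}

type RegistryEvent =
  | { type: 'ProviderRegistered'; providerId: number; owner: string }
  | { type: 'ProviderRemoved'; providerId: number }
  | { type: 'ProductAdded' | 'ProductUpdated'; providerId: number; serviceURL: string };

function applyEvent(
  providers: Map<number, ProviderRecord>,
  ev: RegistryEvent,
): Map<number, ProviderRecord> {
  const next = new Map(providers);
  switch (ev.type) {
    case 'ProviderRegistered':
      // serviceURL is no longer part of this event; it is only known after
      // ProductAdded/ProductUpdated prompts a getPDPProduct() call.
      next.set(ev.providerId, { providerId: ev.providerId, owner: ev.owner, serviceURL: null });
      break;
    case 'ProviderRemoved':
      next.delete(ev.providerId);
      break;
    case 'ProductAdded':
    case 'ProductUpdated': {
      const rec = next.get(ev.providerId);
      if (rec) next.set(ev.providerId, { ...rec, serviceURL: ev.serviceURL });
      break;
    }
  }
  return next;
}
```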

Components to change

⚠️ Beyond the changes outlined here, we also need a mainnet deployment alongside the existing calibration deployment. Let’s do this once the contracts are live.

Goldsky

Subgraphs

  • Deploy FilecoinWarmStorageService subgraph from repo
    • Wait for it to become available
  • Deploy PDPVerifier subgraph from repo
    • Wait for it to become available
  • Deploy ServiceProviderRegistry subgraph from repo
    • Wait for it to become available

Pipelines

  • FilecoinWarmStorageService
    • Wait for subgraph to be deployed
    • Forward event DataSetCreated to /filecoin-warm-storage-service/data-set-created
  • PDPVerifier
    • Wait for subgraph to be deployed
    • Forward event DataSetCreated to /pdp-verifier/data-set-created
    • Forward event PiecesAdded to /pdp-verifier/pieces-added
    • Forward event PiecesRemoved to /pdp-verifier/pieces-removed
  • ServiceProviderRegistry
    • Wait for subgraph to be deployed
    • Forward event ProviderRegistered to /service-provider-registry/provider-registered
    • Forward event ProviderRemoved to /service-provider-registry/provider-removed
    • Forward event ServiceUpdated to /service-provider-registry/service-updated
    • Forward event ProductAdded to /service-provider-registry/product-added
    • Forward event ProductRemoved to /service-provider-registry/product-removed
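The forwarding rules above boil down to a static (contract, event) → endpoint table. A minimal sketch, purely to make the mapping checkable; nothing here is Goldsky-specific:

```typescript
// Pipeline routing table: each (contract, event) pair forwards to exactly
// one worker endpoint, mirroring the list above.
const routes: Record<string, string> = {
  'FilecoinWarmStorageService.DataSetCreated': '/filecoin-warm-storage-service/data-set-created',
  'PDPVerifier.DataSetCreated': '/pdp-verifier/data-set-created',
  'PDPVerifier.PiecesAdded': '/pdp-verifier/pieces-added',
  'PDPVerifier.PiecesRemoved': '/pdp-verifier/pieces-removed',
  'ServiceProviderRegistry.ProviderRegistered': '/service-provider-registry/provider-registered',
  'ServiceProviderRegistry.ProviderRemoved': '/service-provider-registry/provider-removed',
  'ServiceProviderRegistry.ServiceUpdated': '/service-provider-registry/service-updated',
  'ServiceProviderRegistry.ProductAdded': '/service-provider-registry/product-added',
  'ServiceProviderRegistry.ProductRemoved': '/service-provider-registry/product-removed',
};

// Resolve the endpoint for an incoming event; undefined means "not forwarded".
function endpointFor(contract: string, event: string): string | undefined {
  return routes[`${contract}.${event}`];
}
```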

D1 Tables (PR #1)

https://github.com/filcdn/worker/pull/224
  • Switch to new databases
    • And, if necessary, flush old rows
  • provider_urls
    • Rename piece_retrieval_url to service_url
    • Allow service_url to be NULL
    • Add provider_id column
    • Move primary key from address to provider_id
    • Rename address to owner
  • indexer_proof_set_rails
    • Rename table to data_sets
    • Rename column proof_set_id to data_set_id
    • Rename column payee to controller (for #111)
  • indexer_proof_sets
    • Merge with data_sets (based on key set_id)
  • indexer_roots
    • Rename table to pieces
    • Rename column root_id to piece_id
    • Rename column set_id to data_set_id
    • Rename column root_cid to piece_cid
  • proof_set_stats
    • Rename table to data_set_stats
    • Rename column set_id to data_set_id
  • retrieval_logs
    • Rename column proof_set_id to data_set_id
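Since D1 is SQLite-backed, most of the renames above are plain `ALTER TABLE … RENAME` statements, while moving the primary key of `provider_urls` requires a table rebuild. A hedged sketch for two of the tables; column types are assumptions and must be adjusted to the actual schema:

```sql
-- indexer_roots -> pieces: simple renames work in place (SQLite >= 3.25).
ALTER TABLE indexer_roots RENAME TO pieces;
ALTER TABLE pieces RENAME COLUMN root_id TO piece_id;
ALTER TABLE pieces RENAME COLUMN set_id TO data_set_id;
ALTER TABLE pieces RENAME COLUMN root_cid TO piece_cid;

-- provider_urls: moving the primary key from address to provider_id
-- needs a rebuild. Old rows have no provider_id, so (as noted above)
-- they are flushed rather than migrated.
CREATE TABLE provider_urls_new (
  provider_id INTEGER PRIMARY KEY,
  owner TEXT NOT NULL,  -- was: address
  service_url TEXT      -- was: piece_retrieval_url, now nullable
);
DROP TABLE provider_urls;
ALTER TABLE provider_urls_new RENAME TO provider_urls;
```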

Workers (PR #1 @Julian Gruber)

https://github.com/filcdn/worker/pull/224

Indexer

  • Update smart contract addresses
  • Event PDPVerifier.ProofSetCreated
    • Rename endpoint to /pdp-verifier/data-set-created
    • Schema updates
  • Event PDPVerifier.RootsAdded
    • Rename endpoint to /pdp-verifier/pieces-added
    • Call PDPVerifier.getPieceCid()
    • Schema updates
  • Event PDPVerifier.RootsRemoved
    • Rename endpoint to /pdp-verifier/pieces-removed
    • Schema updates
  • Event Pandora.ProofSetRailCreated
    • Rename endpoint to /filecoin-warm-storage-service/data-set-created
    • Get data_set_id not proof_set_id from event
    • Persist data_set_id not proof_set_id
    • Table now called data_sets
    • Read withCDN from new metadata fields
    • Other schema updates
  • Event Pandora.ProviderRegistered
    • Rename endpoint to /service-provider-registry/provider-registered
    • Remove writing of serviceURL, will only be known later
    • Instead of address, write provider_id and owner
    • Other schema updates
  • Event Pandora.ProviderRemoved
    • Rename endpoint to /service-provider-registry/provider-removed
    • Change query to select based on provider_id not address
    • Other schema updates
  • Events ServiceProviderRegistry.ServiceUpdated and ServiceProviderRegistry.ProductAdded
    • Create new endpoint that listens to /service-provider-registry/service-updated and /service-provider-registry/product-added
    • Call ServiceProviderRegistry.getPDPService() to get serviceURL
    • Persist new or updated serviceURL in provider_urls.service_url
    • Other schema updates
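The last item above (the combined ServiceUpdated/ProductAdded handler) can be sketched as follows. The registry client and D1 binding are abstracted behind interfaces; the method name and return shape are assumptions based on the PR description, not the final ABI:

```typescript
// Hedged sketch of the new indexer endpoint that backfills
// provider_urls.service_url after a registry product event.

interface Registry {
  // Returns the PDP product, including serviceURL, for a provider.
  getPDPProduct(providerId: number): Promise<{ serviceURL: string }>;
}

interface Db {
  run(sql: string, ...params: unknown[]): Promise<void>;
}

async function handleProductEvent(
  db: Db,
  registry: Registry,
  event: { providerId: number },
): Promise<string> {
  // Both /service-provider-registry/service-updated and
  // /service-provider-registry/product-added funnel into the same logic:
  // look up the current serviceURL on-chain and persist it.
  const { serviceURL } = await registry.getPDPProduct(event.providerId);
  await db.run(
    'UPDATE provider_urls SET service_url = ? WHERE provider_id = ?',
    serviceURL,
    event.providerId,
  );
  return serviceURL;
}
```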

Retriever

  • Update smart contract addresses
  • Adjust to new D1 schema
    • Read provider_urls.service_url instead of provider_urls.piece_retrieval_url
      • Update URL construction based on read value
      • Handle service_url = NULL
    • Other schema updates
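The retriever-side change above amounts to building the retrieval URL from the renamed `provider_urls.service_url` column and tolerating `NULL` (a provider registered before its PDP product is known). A minimal sketch; the `/piece/<cid>` path shape is an assumption for illustration:

```typescript
// Build the piece retrieval URL from service_url, or return null when the
// provider has no serviceURL yet and so cannot serve retrievals.
function pieceRetrievalUrl(serviceUrl: string | null, pieceCid: string): URL | null {
  if (serviceUrl === null) {
    // No serviceURL yet: treat like an unknown provider.
    return null;
  }
  // Normalize trailing slashes so URL construction is stable.
  const base = serviceUrl.endsWith('/') ? serviceUrl : serviceUrl + '/';
  return new URL(`piece/${pieceCid}`, base);
}
```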

Bot (PR #2 @Srdjan Stankovic)

  • Contract ABI update
  • Schema updates

Main work items

  • Goldsky
  • Bot PR
  • Tracking PRs for components still in progress (these should be worked on after the main PR is ready)
    • Beneficiary separation PR
    • Key-value metadata PR
  • Mainnet deployment

Code sequence

What to work on in what order:

```mermaid
graph TD;
  Goldsky
  main[Main PR]
  bot[Bot PR]
  subgraph Tracking
    Beneficiary
    kv[Key-value metadata]
  end
  ct[(Contract deployment)]
  main --> Tracking
  ct --> Goldsky
```

Deployment sequence

What can be merged/shipped in what order:

```mermaid
graph TD;
  Goldsky
  main[Main PR]
  bot[Bot PR]
  subgraph Tracking
    Beneficiary
    kv[Key-value metadata]
  end
  ct[(Contract deployment)]
  ct --> Goldsky
  ct --> Tracking
  Goldsky --> main
  Tracking --> main
  main --> bot
  bot --> m[Mainnet deployment of everything, the above is calibration]
```

Branching strategy:

  1. Create the M2.5 PR
  2. Create the main PR targeting the M2.5 PR
    1. Can be split into multiple PRs if possible
  3. Once the main PR is merged into the M2.5 PR, create the tracking PRs, also targeting the M2.5 PR
  4. Merge the M2.5 PR into main
    1. This step will execute the database migrations

Rename databases:

  • filcdn-mainnet-db
  • filcdn-calibration-db